
    Object-based flood analysis using a graph-based representation

    The amount of freely available satellite data is growing rapidly as a result of Earth observation programmes such as Copernicus, an initiative of the European Space Agency. Analysing these huge amounts of geospatial data and extracting useful information is an ongoing pursuit. This paper presents an alternative method for flood detection based on the description of spatio-temporal dynamics in satellite image time series (SITS). Since synthetic aperture radar (SAR) satellites can capture images day and night, irrespective of weather conditions, SAR is the preferred tool for flood mapping from space. An object-based approach limits the necessary computing power and computation time, while a graph-based approach allows for a comprehensible interpretation of the dynamics. The method proves to be a useful tool for gaining insight into a flood event: the graph representation helps to identify and locate entities within the study site and to describe their evolution throughout the time series.
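The core idea of such a graph-based representation can be sketched in a few lines: water-body objects detected at consecutive timesteps are linked whenever they spatially overlap, so splits and merges of a flood become edges in a graph. The object IDs and pixel sets below are invented for illustration; the paper's actual delineation and graph model are more involved.

```python
# Minimal sketch: link water-body objects across consecutive timesteps
# by spatial overlap, yielding edges that trace flood evolution.
# Objects are hypothetical sets of (row, col) pixel coordinates.

def build_overlap_graph(timesteps):
    """timesteps: list of dicts {object_id: set of pixel coords}."""
    edges = []
    for t in range(len(timesteps) - 1):
        for id_a, pixels_a in timesteps[t].items():
            for id_b, pixels_b in timesteps[t + 1].items():
                if pixels_a & pixels_b:  # any spatial overlap links the objects
                    edges.append(((t, id_a), (t + 1, id_b)))
    return edges

# Two timesteps: one water body splits into two overlapping successors.
t0 = {"lake": {(0, 0), (0, 1), (1, 0)}}
t1 = {"lake_n": {(0, 0), (0, 1)}, "lake_s": {(1, 0), (2, 0)}}
print(build_overlap_graph([t0, t1]))
# [((0, 'lake'), (1, 'lake_n')), ((0, 'lake'), (1, 'lake_s'))]
```

An object that overlaps two successors (a split) simply gains two outgoing edges, which is what makes the evolution legible in the graph.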

    A visualization tool for flood dynamics monitoring using a graph-based approach

    Insights into flood dynamics, rather than solely flood extent, are critical for effective flood disaster management, in particular in the context of emergency relief and damage assessment. Although flood dynamics provide insight into the spatio-temporal behaviour of a flood event, operational visualization tools are to date scarce or even non-existent. In this letter, we distil a flood dynamics map from a radar satellite image time series (SITS). For this, we have upscaled and refined an existing design, originally developed for a small area, that describes flood dynamics using an object-based approach and a graph-based representation. Two case studies demonstrate the operational value of this method by visualizing flood dynamics that are not visible on regular flood extent maps. Delineated water bodies are grouped into graphs according to their spatial overlap on consecutive timesteps. Differences in area and backscatter are used to quantify the amount of variation, resulting in a global variation map and a temporal profile for each water body, visually describing the evolution of the backscatter and the number of polygons that make up the water body. The process of upscaling led us to apply a different water delineation approach, a different way of ensuring the minimal mapping unit, and increased code efficiency. The framework delivers a new, straightforward and efficient way of visualizing floods. The produced global variation maps can be applied in the context of data assimilation and disaster impact management.
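As a rough illustration of the temporal-profile step, the sketch below computes relative area change and backscatter difference between consecutive observations of one hypothetical water body. The numbers and field names are invented, not taken from the letter.

```python
# Sketch: quantify per-waterbody variation between consecutive timesteps
# from (area, mean backscatter) observations; values are illustrative.

def variation_profile(observations):
    """observations: list of (area_m2, mean_backscatter_db) per timestep."""
    profile = []
    for (a0, b0), (a1, b1) in zip(observations, observations[1:]):
        profile.append({
            "area_change": (a1 - a0) / a0,   # relative change in area
            "backscatter_change": b1 - b0,   # difference in dB
        })
    return profile

# A water body that expands while its backscatter drops, then recedes:
obs = [(1000.0, -18.0), (1500.0, -20.0), (1200.0, -19.0)]
print(variation_profile(obs))
```

Summing the magnitudes of such per-step changes over all water bodies is one plausible way to arrive at a single "amount of variation" per object for a global map.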

    Flood mapping in vegetated areas using an unsupervised clustering approach on Sentinel-1 and -2 imagery

    The European Space Agency's Sentinel-1 constellation provides timely and freely available dual-polarized C-band Synthetic Aperture Radar (SAR) imagery. The launch of these and other SAR sensors has boosted the field of SAR-based flood mapping. However, flood mapping in vegetated areas remains a topic under investigation, as backscatter is the result of a complex mixture of backscattering mechanisms and strongly depends on the wave and vegetation characteristics. In this paper, we present an unsupervised object-based clustering framework capable of mapping flooding in the presence and absence of flooded vegetation based on freely and globally available data only. Based on a SAR image pair, the region of interest is segmented into objects, which are converted to a SAR-optical feature space and clustered using K-means. These clusters are then classified based on automatically determined thresholds, and the resulting classification is refined by means of several region-growing post-processing steps. The final outcome discriminates between dry land, permanent water, open flooding, and flooded vegetation. Forested areas, which might hide flooding, are indicated as well. The framework is presented based on four case studies, of which two contain flooded vegetation. For the optimal parameter combination, three-class F1 scores between 0.76 and 0.91 are obtained depending on the case, and the pixel- and object-based thresholding benchmarks are outperformed. Furthermore, the framework allows easy integration of additional data sources when these become available.
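A toy version of the clustering step might look as follows: objects in a hypothetical two-feature space (SAR backscatter in dB, an optical water index) are grouped with a small K-means, and the resulting cluster centres are labelled by a simple backscatter threshold. The features, values and threshold are illustrative assumptions, not the paper's actual feature space or its automatically determined thresholds.

```python
import random

# Toy sketch of object clustering: K-means on a made-up two-feature
# space (mean backscatter in dB, optical water index per object),
# followed by labelling cluster centres with an assumed dB threshold.

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# Objects: open water (low backscatter, high index) vs dry land.
objects = [(-22.0, 0.6), (-21.0, 0.5), (-6.0, -0.2), (-5.0, -0.3)]
centers = kmeans(objects, k=2)
labels = {c: ("open water" if c[0] < -15.0 else "dry land") for c in centers}
print(labels)
```

In the paper the thresholds are derived automatically rather than fixed, and additional classes (permanent water, flooded vegetation) and region-growing refinements follow this step.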

    Heat risk assessment for the Brussels Capital Region under different urban planning and greenhouse gas emission scenarios

    Urban residents are exposed to higher levels of heat stress than the rural population. As this phenomenon could be enhanced by both global greenhouse gas (GHG) emissions and urban expansion, urban planners and policymakers should integrate both in their assessments. One way to consider these two concepts is by using urban climate models at a high resolution. In this study, the influence of urban expansion and GHG emission scenarios is evaluated at 100 m spatial resolution for the city of Brussels (Belgium) in the near (2031-2050) and far (2081-2100) future. Two possible urban planning scenarios (translated into local climate zones, LCZs) in combination with two representative concentration pathways (RCPs 4.5 and 8.5) have been implemented in the urban climate model UrbClim. The projections show that the influence of GHG emissions trumps urban planning measures in each period. In the near future, no large differences are seen between the RCP scenarios; in the far future, both heat stress and risk values are twice as large for RCP 8.5 as for RCP 4.5. Depending on the GHG scenario and the LCZ type, heat stress is projected to increase by a factor of 10 by 2090 compared to present-day climate and urban planning conditions. The imprint of vulnerability and exposure is clearly visible in the heat risk assessment, leading to very high levels of heat risk, most notably for the north-western part of the Brussels Capital Region. The results demonstrate the need for mitigation and adaptation plans at different policy levels that strive for lower GHG emissions and the development of sustainable urban areas safeguarding livability in cities.
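The way vulnerability and exposure leave their imprint on risk can be illustrated with the common risk-as-product framing (risk = hazard × exposure × vulnerability). The function and values below are a hypothetical sketch of that framing, not the UrbClim-based assessment itself.

```python
# Illustrative sketch of a composite risk index from normalised hazard,
# exposure and vulnerability layers; all inputs and the multiplicative
# combination are assumptions for illustration, not the study's method.

def heat_risk(hazard, exposure, vulnerability):
    """All inputs normalised to [0, 1]; returns a [0, 1] risk index."""
    for v in (hazard, exposure, vulnerability):
        assert 0.0 <= v <= 1.0, "inputs must be pre-normalised"
    return hazard * exposure * vulnerability

# A densely built, highly populated, vulnerable district under a
# strong-warming hazard level:
print(round(heat_risk(0.9, 0.8, 0.7), 3))  # 0.504
```

Under this framing, a district with high hazard but near-zero exposure scores low risk, which is exactly why the vulnerability and exposure layers reshape the spatial pattern relative to a pure heat-stress map.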

    Fusion of pixel-based and object-based features for classification of urban hyperspectral remote sensing data

    Hyperspectral imagery contains a wealth of spectral and spatial information that can improve target detection and recognition performance. Typically, spectral information is inferred on a per-pixel basis, while spatial information related to texture, context and geometry is deduced on a per-object basis. Existing feature extraction methods cannot fully utilize both the spectral and the spatial information. Data fusion by simply stacking different feature sources together does not take into account the differences between those sources. In this paper, we propose a feature fusion method that couples dimension reduction and data fusion of the pixel- and object-based features of hyperspectral imagery. The proposed method takes into account the properties of the different feature sources and takes full advantage of both the pixel- and object-based features through the fusion graph. Experimental results on the classification of an urban hyperspectral remote sensing image are very encouraging.
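The paper's fusion graph couples dimension reduction with fusion, and a full reimplementation is out of scope here. As a minimal contrast, the sketch below shows the naive baseline it criticises (plain feature stacking) together with the one fix that motivates source-aware fusion: standardising each source before stacking, so no source dominates purely through its numeric scale. All feature values are made up.

```python
# Sketch of the stacking baseline with per-source standardisation.
# Hypothetical pixel-based spectral features and object-based shape
# features live on very different numeric scales; raw stacking would
# let the large-valued source dominate any distance-based classifier.

def standardise(features):
    """Zero-mean, unit-variance per dimension (population std)."""
    n = len(features)
    dims = len(features[0])
    means = [sum(f[d] for f in features) / n for d in range(dims)]
    stds = [(sum((f[d] - means[d]) ** 2 for f in features) / n) ** 0.5 or 1.0
            for d in range(dims)]
    return [[(f[d] - means[d]) / stds[d] for d in range(dims)] for f in features]

pixel_feats = [[0.10, 0.30], [0.20, 0.40], [0.30, 0.50]]   # reflectance-like
object_feats = [[1200.0], [1500.0], [1800.0]]              # e.g. object area
fused = [p + o for p, o in zip(standardise(pixel_feats), standardise(object_feats))]
print(fused[0])
```

The proposed fusion-graph method goes further than this, learning a joint low-dimensional embedding instead of merely rescaling before concatenation.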

    Multi-date Sentinel-1 SAR image textures discriminate perennial agroforests in a tropical forest-savannah transition landscape

    Synthetic Aperture Radar (SAR) provides consistent information on target land features, especially in tropical conditions that limit the use of optical imaging sensors. Because the radar response signal is influenced by the geometric and dielectric properties of surface features, different land-cover types may appear similar in radar images. For discriminating perennial cocoa agroforestry land cover, we compare a multi-spectral optical image from RapidEye, acquired in the dry season, with multi-seasonal C-band SAR imagery from Sentinel-1: a final set of 10 (out of 50) images representing six dry and four wet seasons from 2015 to 2017. We ran eight random forest (RF) models for different input band combinations: multi-spectral reflectance, vegetation indices, co-polarised (VV) and cross-polarised (VH) SAR intensity, and Grey Level Co-occurrence Matrix (GLCM) texture measures. Following a pixel-based image analysis, we evaluated accuracy metrics and classification uncertainty using Shannon entropy. The model comprising co- and cross-polarised texture bands had the highest accuracy of 88.07 % (95 % CI: 85.52-90.31) and a kappa of 85.37, as well as low class uncertainty for perennial agroforests and transition forests. The optical image had low classification uncertainty for the entire image, but it performed better only in discriminating non-vegetated areas. The measured uncertainty provides a reliable basis for comparing class discrimination across different image resolutions. The GLCM texture measures that are crucial for delineating vegetation cover differed with the season and polarisation of the SAR image. Given the high mapping accuracies, our approach has value for landscape monitoring and for an improved valuation of the contribution of agroforestry to REDD+ strategies in the Congo basin sub-region.
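To make the texture bands concrete, the sketch below computes a Grey Level Co-occurrence Matrix and its contrast measure for a single offset (one pixel to the right) on a tiny made-up image; real GLCM texture bands aggregate several offsets and measures over a moving window, and the grey-level count here is an arbitrary assumption.

```python
# Minimal sketch of a GLCM and its contrast measure for one offset
# (horizontal neighbour); the 3x3 image and 3 grey levels are invented.

def glcm(image, levels):
    """Co-occurrence counts for horizontally adjacent pixel pairs."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def contrast(m):
    """Sum of squared grey-level differences, weighted by co-occurrence."""
    total = sum(map(sum, m)) or 1
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

img = [[0, 0, 1],
       [1, 2, 2],
       [0, 1, 2]]
g = glcm(img, levels=3)
print(g, round(contrast(g), 3))
# [[1, 2, 0], [0, 0, 2], [0, 0, 1]] 0.667
```

Homogeneous canopies such as mature agroforests concentrate counts near the GLCM diagonal (low contrast), which is one reason such measures help separate them from more heterogeneous cover.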